You can steal portions of closed language models, such as the final embedding projection layer, just by querying their public APIs, and do it on a modest budget of under $2,000.
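For intuition, here is a minimal, self-contained sketch (not the researchers' code) of the linear-algebra observation this kind of extraction rests on: every logit vector an API returns is a hidden state of width h multiplied by the model's final projection matrix, so a stack of recovered logit vectors has rank h. The toy sizes and random matrices below are assumptions for illustration only; in a real attack, the budget goes toward reconstructing full logit vectors through the provider's logprob and logit-bias options.

```python
# Illustrative sketch only -- random stand-in matrices and toy sizes, not real model data.
import numpy as np

rng = np.random.default_rng(0)
vocab, hidden, n_queries = 4_096, 256, 1_024     # assumed toy dimensions, not a real model

W = rng.normal(size=(vocab, hidden))             # stand-in for the secret projection matrix
H = rng.normal(size=(n_queries, hidden))         # hidden states for n different prompts
logits = H @ W.T                                 # what the recovered full logit vectors factor into

# The logit matrix has rank `hidden`, so its singular values collapse after that index,
# revealing the model's hidden width from API outputs alone.
U, s, Vt = np.linalg.svd(logits, full_matrices=False)
estimated_hidden = int(np.sum(s > s[0] * 1e-10))
print(estimated_hidden)                          # -> 256

# The top right-singular vectors span the same subspace as W's columns, i.e. the
# projection layer is recovered up to an unknown hidden-by-hidden linear transform.
W_rowspace = Vt[:estimated_hidden]
```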
Researchers have created a generative AI worm called Morris II that can attack AI systems like ChatGPT, spreading autonomously while potentially stealing data. The worm uses "adversarial self-replicating prompts" to propagate itself and compromise AI email assistants, highlighting new cyberattack risks within the AI ecosystem. Security experts urge AI developers to take potential AI-driven threats seriously as AI applications become more autonomous and interconnected.
Anthropic developed a technique for jailbreaking long-context models, shared the findings with other organizations, and implemented mitigations. This post outlines the technique and some of the defenses Anthropic put in place against it.
OpenAI has outlined the security architecture of its AI training supercomputers, which run on Azure-based infrastructure with Kubernetes for orchestration, emphasizing how it protects sensitive assets such as model weights.
In this session, you'll learn how to train an animal detection model and run it on Chainguard's AI Image, use lightweight container images to minimize the AI attack surface, and deploy AI frameworks like PyTorch with zero CVEs from day one.
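As a rough sketch of what the training step could look like (the dataset path, model choice, and hyperparameters below are placeholders, not the session's actual materials), a small PyTorch fine-tuning script like this is the kind of workload you would then package on a minimal, hardened PyTorch base image:

```python
# Minimal sketch of an animal-classifier fine-tuning script; paths and settings are
# illustrative assumptions, not Chainguard's session code.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# Assumes an ImageFolder-style dataset: data/train/<class_name>/<image>.jpg
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Small pretrained backbone with a new classification head for the animal classes.
model = models.mobilenet_v3_small(weights="DEFAULT")
model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, len(train_set.classes))
model = model.to(device)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.4f}")

torch.save(model.state_dict(), "animal_classifier.pt")
```

The Python side stays ordinary; the attack-surface and CVE reduction comes from running it on a stripped-down base image rather than a general-purpose OS image.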